24 research outputs found

    Network link dimensioning : a measurement & modeling based approach

    Adequate network link dimensioning requires a thorough insight into the interrelationship between: (i) the traffic offered (in terms of the average load, but also its fluctuations), (ii) the desired level of performance, and (iii) the required bandwidth capacity. It is clear that more capacity is needed when the average traffic load becomes higher, the fluctuations become fiercer, or the performance criterion becomes more stringent. Existing approaches to network link dimensioning are often based on rules of thumb, e.g., ‘take the average traffic rate at times when the network is relatively busy, and add 30% to cater for fluctuations’. Clearly, such an approach does not explicitly incorporate the fierceness of the traffic rate’s fluctuations, or the desired level of performance.

    A common approach to estimating the average traffic rate is as follows. A network manager regularly polls the so-called Interfaces Group MIB via the Simple Network Management Protocol (SNMP), for instance through a tool such as the Multi-Router Traffic Grapher (MRTG). This yields the average rate of the offered traffic since the last poll. The polling interval is generally in the order of 5 minutes. Evidently, the fierceness of the fluctuations of the traffic rate within these 5-minute intervals is unknown to the network manager. These fluctuations may, however, be considerable, and noticeable to users of the network. If, at timescales of say 5 seconds, more traffic is offered to a network link than it can transfer during that interval, traffic may be lost. Such loss can lead to performance degradation that is noticeable to a network user; for instance, entire words may be lost in a (voice) conversation. Hence, it is in the interest of network users, and for obvious business reasons also of network operators, to have sufficient bandwidth capacity available to meet the demand at timescales considerably smaller than 5 minutes.
    In this thesis, we develop an alternative approach to network link dimensioning, which explicitly incorporates the offered traffic, in terms of both its average rate and its fluctuations at small timescales, as well as the desired level of performance. This is expressed through mathematical formulas that give the required bandwidth capacity, given the characteristics of the offered traffic and the performance criterion.
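    The gap between coarse polling averages and short-timescale peaks described above can be illustrated with a small, purely synthetic sketch (the traffic values are made up for illustration):

```python
import random

# Hypothetical illustration: a 5-minute polling interval reports only the
# mean rate, hiding short-lived peaks at the 5-second timescale.
random.seed(42)

# Synthetic per-second traffic rates (Mbit/s): a modest base load with
# occasional 40 Mbit/s bursts, about 5% of the time.
rates = [10 + (40 if random.random() < 0.05 else 0) for _ in range(300)]

# What MRTG-style 5-minute polling reports: one average over 300 seconds.
five_min_avg = sum(rates) / len(rates)

# Worst average over any 5-second window -- invisible to 5-minute polling.
peak_5s = max(sum(rates[i:i + 5]) / 5 for i in range(0, 300, 5))

print(f"5-minute average:       {five_min_avg:.1f} Mbit/s")
print(f"worst 5-second average: {peak_5s:.1f} Mbit/s")
```

    A link provisioned for the 5-minute average alone would be overrun during the worst 5-second windows, which is exactly the loss scenario the abstract describes.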

    Resource dimensioning through buffer sampling

    Link dimensioning, i.e., selecting a (minimal) link capacity such that the users’ performance requirements are met, is a crucial component of network design. It requires insight into the interrelationship between the traffic offered (in terms of the mean offered load M, but also its fluctuation around the mean, i.e., ‘burstiness’), the envisioned performance level, and the capacity needed. We first derive, for different performance criteria, theoretical dimensioning formulae that estimate the required capacity C as a function of the input traffic and the performance target. For the special case of Gaussian input traffic these formulae reduce to C = M + αV, where α directly relates to the performance requirement (as agreed upon in a service level agreement) and V reflects the burstiness (at the timescale of interest). We also observe that Gaussianity applies for virtually all realistic scenarios; notably, already at a relatively low aggregation level the Gaussianity assumption is justified.

    As estimating M is relatively straightforward, the remaining open issue concerns the estimation of V. We argue that, particularly if V corresponds to small timescales, it may be inaccurate to estimate it directly from the traffic traces. Therefore, we propose an indirect method that samples the buffer content, estimates the buffer content distribution, and ‘inverts’ this to the variance. We validate the inversion through extensive numerical experiments (using a sizeable collection of traffic traces from various representative locations); the resulting estimate of V is then inserted in the dimensioning formula. These experiments show that both the inversion and the dimensioning formula are remarkably accurate.
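    The dimensioning formula C = M + αV can be sketched in a few lines. The abstract does not spell out the mapping from the performance target to α; the choice α = √(−2 ln ε), for an exceedance probability ε, is an assumption used here only to make the sketch concrete:

```python
import math

def required_capacity(M, V, eps):
    """Gaussian link-dimensioning sketch: C = M + alpha * V.

    M   -- mean offered load
    V   -- burstiness term at the timescale of interest
    eps -- target exceedance probability

    The mapping alpha = sqrt(-2 ln eps) is an assumption; the abstract
    only states that alpha relates to the performance requirement.
    """
    alpha = math.sqrt(-2.0 * math.log(eps))
    return M + alpha * V

# Example: mean load 100 Mbit/s, burstiness term 15, target eps = 0.01.
C = required_capacity(100.0, 15.0, 0.01)
```

    Note how the capacity decomposes into the mean load plus a burstiness surcharge that grows as the performance target ε is tightened.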

    Assessing unknown network traffic

    Recent measurements have shown that a growing fraction of all Internet traffic is unknown: it is unclear which applications are causing the traffic. Therefore, we have developed and applied a novel methodology to find out what applications are running on the network. This methodology is based on the notion of ‘induced traffic’: applications cannot, on a wide scale, run solely on unknown ports; thus, the hypothesis is that traffic on unknown ports should be preceded by traffic on known ports between the same peers. We have developed and implemented an algorithm to test this hypothesis. After applying the algorithm in two case studies we, unfortunately, have to conclude that although some improvement is made, there is still a significant fraction of traffic that cannot be identified.
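    The induced-traffic hypothesis can be sketched as a simple classification pass over time-ordered flow records. The record layout, port list, and host names below are hypothetical; only the idea (attribute an unknown-port flow to the application of an earlier known-port flow between the same peers) reflects the abstract:

```python
# Hypothetical set of well-known ports and their applications.
KNOWN_PORTS = {80: "http", 21: "ftp-control"}

def classify(flows):
    """Label each flow by application using the induced-traffic heuristic.

    flows -- iterable of (start_time, src, dst, port) records.
    A flow on an unknown port inherits the application of the most recent
    known-port flow between the same pair of hosts, if any.
    """
    seen_known = {}  # (host, host) pair -> application of last known-port flow
    labels = []
    for _t, src, dst, port in sorted(flows):
        pair = tuple(sorted((src, dst)))
        if port in KNOWN_PORTS:
            seen_known[pair] = KNOWN_PORTS[port]
            labels.append(KNOWN_PORTS[port])
        else:
            labels.append(seen_known.get(pair, "unknown"))
    return labels

flows = [
    (0.0, "hostA", "hostB", 21),     # FTP control connection (known port)
    (1.0, "hostA", "hostB", 52310),  # unknown port, same peers -> induced
    (2.0, "hostC", "hostD", 40000),  # unknown port, no prior known flow
]
print(classify(flows))  # ['ftp-control', 'ftp-control', 'unknown']
```

    The last flow remains unidentified, mirroring the abstract's finding that a significant fraction of traffic has no preceding known-port flow to inherit from.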

    Smart Dimensioning of IP Network Links

    Link dimensioning is generally considered an effective and (operationally) simple mechanism to meet (given) performance requirements. In practice, the required link capacity C is often estimated by rules of thumb, such as C = d·M, where M is the (envisaged) average traffic rate, and d some (empirically determined) constant larger than 1. This paper studies the viability of this class of ‘simplistic’ dimensioning rules. Throughout, the performance criterion imposed is that the fraction of intervals of length T in which the input exceeds the available output capacity (i.e., C·T) should not exceed ε, for given T and ε.

    We first present a dimensioning formula that expresses the required link capacity as a function of M and a variance term V(T), which captures the burstiness at timescale T. We explain how M and V(T) can be estimated with low measurement effort. The dimensioning formula is then used to validate dimensioning rules of the type C = d·M. Our main findings are: (i) the factor d is strongly affected by the nature of the traffic, the level of aggregation, and the network infrastructure; if these conditions are more or less constant, one could empirically determine d; (ii) we can explicitly characterize how d is affected by the ‘performance parameters’, i.e., T and ε.
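    Empirically determining the rule-of-thumb factor d for a given T and ε can be sketched directly from the performance criterion above. The per-interval volumes below are synthetic; only the procedure (find the smallest C such that at most a fraction ε of intervals exceeds C·T, then take d = C/M) follows the criterion stated in the abstract:

```python
def smallest_capacity(volumes, T, eps):
    """Smallest observed capacity C such that the fraction of intervals
    whose volume exceeds C*T is at most eps (searched over the observed
    per-interval rates)."""
    rates = sorted(v / T for v in volumes)
    allowed_exceed = int(eps * len(rates))   # intervals allowed above C
    return rates[len(rates) - 1 - allowed_exceed]

T = 1.0  # interval length in seconds
# Synthetic trace: Mbit carried per interval, with one large burst.
volumes = [80, 85, 90, 95, 100, 105, 110, 115, 140, 300]

M = sum(volumes) / (len(volumes) * T)        # mean rate over the trace
C = smallest_capacity(volumes, T, eps=0.1)   # tolerate 1 in 10 intervals
d = C / M
```

    With these (synthetic) numbers, C ends up at the second-highest interval rate, and d comes out slightly above 1; as the abstract notes, the appropriate d depends strongly on the traffic's burstiness and on T and ε.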

    Simpleweb/University of Twente Traffic Traces Data Repository

    The computer networks research community lacks shared measurement data. As a consequence, most researchers need to spend a considerable part of their time planning and executing measurements before being able to perform their studies. The lack of shared data also makes it hard to compare and validate results. This report describes our efforts to distribute a portion of our network data through the Simpleweb/University of Twente Traffic Traces Data Repository.

    Comparing the Performance of SNMP and Web Services-Based Management

    This paper compares the performance of Web-services-based network monitoring to traditional, SNMP-based monitoring. The study focuses on the ifTable, and investigates performance as a function of the number of retrieved objects. The following aspects are examined: bandwidth usage, CPU time, memory consumption and round-trip delay. For our study, several prototypes of Web-services-based agents were implemented; these prototypes can retrieve single ifTable elements, ifTable rows, ifTable columns, or the entire ifTable. This paper presents a generic formula to calculate SNMP’s bandwidth requirements; the bandwidth consumption of our prototypes was compared against this formula. The CPU time, memory consumption and round-trip delay of our prototypes were compared to Net-SNMP, as well as several other SNMP agents. Our measurements show that SNMP is more efficient in cases where only a single object is retrieved; for larger numbers of objects, Web services may be more efficient. Our study also shows that, if performance is the issue, the choice between BER (SNMP) and XML (Web services) encoding is generally not the determining factor; other choices can have a stronger impact on performance.
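    Why a crossover between the two approaches can occur is easy to illustrate with a toy linear cost model. All constants below are hypothetical, not measurements from the paper: the sketch only captures the qualitative shape (a small fixed cost but higher per-object cost for one protocol, a large fixed envelope but lower per-object cost for the other):

```python
def snmp_bytes(n):
    """Toy model: small fixed header, every object named individually."""
    return 60 + 30 * n

def ws_bytes(n):
    """Toy model: large SOAP/HTTP envelope paid once, cheaper per object
    (e.g., because repetitive XML compresses well)."""
    return 600 + 12 * n

# First table size at which the Web-services response uses less bandwidth.
crossover = next(n for n in range(1, 10_000) if ws_bytes(n) < snmp_bytes(n))
print(crossover)
```

    Under this toy model, SNMP wins for small retrievals and Web services win beyond the crossover point, matching the qualitative finding of the abstract; the actual crossover in the paper depends on the measured per-object and fixed costs.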

    Dimensioning network links: a new look at equivalent bandwidth
